
The win in Monaco and a hot season continues



Mark Webber's winning dive into the pool; he's mid-air!

As Red Bull Racing prepares for its upcoming race events, a recent highlight came at the Monaco Grand Prix, where the team won the race for the third consecutive time! It was a nail-biting race that also resulted in a second consecutive Monaco win for Mark Webber.

Platform Computing logo on the car. Very cool.
The partnership between Red Bull Racing and Platform Computing, an IBM company, is an exciting one; seeing the team rise to the top and win the most prestigious race on the calendar… again… was truly amazing! It was nothing but smiles at the Energy Station on race day. A big CONGRATULATIONS to the team and the best of luck as the season pushes ahead!

For more information on the partnership between Platform Computing and Red Bull Racing, see www.ibm.com/platformcomputing, check out the video, or read the case study.

Next up: the Grand Prix of Europe.

Why Combine Platform Computing with IBM?

You may have read the news that Platform Computing has signed a definitive agreement to be acquired by IBM, and you may wonder why. I’d like to share with you our thinking at Platform and our relevance to you and to the dramatic evolution of enterprise computing. Even though I'm not an old man yet and am usually too busy doing stuff, for once I will try to look at the past, present and future.

After the first two generations of IT architecture, centralized mainframe and networked client/server, IT has finally advanced to its third generation architecture of (true) distributed computing. An unlimited number of resource components, such as servers, storage devices and interconnects, are glued together by a layer of management software to form a logically centralized system – call it virtual mainframe, cluster, grid, cloud, or whatever you want. The users don’t really care where the “server” is, as long as they can access application services – probably over a wireless connection. Oh, they also want those services to be inexpensive and preferably on a pay-for-use basis. Like a master athlete making the most challenging routines look so easy, such a simple computing model actually calls for a sophisticated distributed computing architecture. Users’ priorities and budgets differ, apps’ resource demands fluctuate, and the types of hardware they need vary. So, the management software for such a system needs to be able to integrate whatever resources, morph them dynamically to fit each app’s needs, arbitrate amongst competing apps’ demands, and deliver IT as a service as cost effectively as possible. This idea gave birth to commodity clusters, enterprise grids, and now cloud. This has been the vision of Platform Computing since we began 19 years ago.



Just as client/server took 20 years to mature into the mainstream, clusters and grids have taken 20 years, and cloud for general business apps is still just emerging. Two areas have been leading the way: HPC/technical computing, followed by Internet services. It’s no accident that Platform Computing was founded in 1992 by Jingwen Wang and me, two renegade Computer Science professors with no business experience or even interest. We wanted to translate ‘80s distributed operating systems research into cluster and grid management products. That’s when the lowly x86 servers were becoming powerful enough to do the big proprietary servers’ job, especially if a bunch of them banded together to form a cluster, and later on an enterprise grid with multiple clusters. One system for all apps, shared with order. Initially, we talked to all the major systems vendors to transfer university research results to them, but there were no takers. So, we decided to practice what we preached. We have been growing and profitable all these 19 years with no external funding. Using clusters and grids, we replaced a supercomputer at Pratt & Whitney to run 10 times more Boeing 777 jet engine simulations, and we supported CERN in processing insane amounts of data looking for the God Particle. While the propeller heads were having fun, enterprises in financial services, manufacturing, pharmaceuticals, oil & gas, electronics, and the entertainment industries turned to these low-cost, massively parallel systems to design better products and devise more clever services. To make money, they compute. To out-compete, they out-compute.

The second area adopting clusters, grids and cloud, following HPC/technical computing, is Internet services. By the early 2000s, business at Amazon and Google was going gangbusters, yet they wanted a more cost-effective and infinitely scalable system rather than buying expensive proprietary systems. They developed their own management software to lash together x86 servers running Linux. They even developed their own middleware, such as MapReduce, for processing massive amounts of “unstructured” data.



This brings us to the present day and the pending acquisition of Platform by IBM. Over the last 19 years, Platform has developed a set of distributed middleware and workload and resource management software to run apps on clusters and grids. To keep leading our customers forward, we have extended our software to private cloud management for more types of apps, including Web services, MapReduce and all kinds of analytics. We want to do for enterprises what Google did for themselves, by delivering the management software that glues together whatever hardware resources and applications these enterprises use for production. In other words, Google computing for the enterprise. Platform Computing.



So, it’s all about apps (or IaaS :-)). Old apps going distributed, new apps built as distributed. Platform’s 19 years of profitable growth has been fueled by delivering value to more and more types of apps for more and more customers. Platform has continued to invest in product innovation and customer services.

The foundation of this acquisition is the ever-expanding technical computing market going mainstream. IDC has been tracking this technical computing systems market segment at $14B, or 20% of the overall systems market. It is growing at 8% per year, or twice the growth rate of servers overall. Both IDC and users also point out that the biggest bottleneck to wider adoption is the complexity of clusters and grids, and thus the escalating need for middleware and management software to hide all the moving parts and just deliver IT as a service. You see, it’s well worth paying a little for management software to get the most out of your hardware. Platform has a single mission: to rapidly deliver effective distributed computing management software to the enterprise. On our own, especially in the early days when the going was tough, we have been doing a pretty good job for some enterprises in some parts of the world. But we are only 536 heroes. Combined with IBM, we can get to all the enterprises worldwide. We have helped our customers run their businesses better, faster, cheaper. After 19 years, IBM convinced us that there can also be a “better, faster, cheaper” way to help more customers and to grow our business. As they say, it’s all about leverage and scale.

We all have to grow up, including the propeller heads. Some visionary users will continue to buy the pieces of hardware and software to lash together their own systems. Most enterprises expect to get whole systems ready to run their apps, but they don’t want to be tied down to proprietary systems and vendors. They want choices. Heterogeneity is the norm rather than the exception. Platform’s management software delivers the capabilities they want while enabling their choices of hardware, OS and apps.


IBM’s Systems and Technology Group wants to remain a systems business, not a hardware business or a parts business. Therefore, IBM’s renewed emphasis is on systems software in its own right. IBM and Platform, the two complementary market leaders in technical computing systems and management software respectively, are coming together to provide overall market leadership and help customers do more cost-effective computing. In IBM-speak, it’s smarter systems for smarter computing enabling a Smarter Planet. Not smarter people. Just normal people doing smarter things supported by smarter systems.

Now that I hopefully have you convinced that we at Platform are not nuts coming together with IBM, we hope to show you that Platform’s products and technologies have legs to go beyond clusters and grids. After all, HPC/technical computing has always been a fountainhead of new technology innovation feeding into the mainstream. Distributed computing as a new IT architecture is one such example. Our newer products for private cloud management, Platform ISF, and for unstructured data analytics, Platform MapReduce, are showing some early promise, even awards, followed by revenue. 

IBM expects Platform to operate as a coherent business unit within its Systems and Technology Group. We got some promises from folks at IBM. We will accelerate our investments and growth. We will deliver on our product roadmaps. We will continue to provide our industry-best support and services. We will work even harder to add value to our partners, including IBM’s competitors. We want to make new friends while keeping the old, for one is silver while the other is gold. We might even get to keep our brand name. After all, distributed computing needs a platform, and there is only one Platform Computing. We are an optimistic bunch. We want to deliver to you the best of both worlds – you know what I mean. Give us a chance to show you what we can do for you tomorrow. Our customers and partners have journeyed with Platform all these years and have not regretted it. We are grateful to them eternally.

So, once a pile of approvals is in place, Platform Computing as a standalone company may come to an end, but the journey continues on to clusters, grids, clouds, or whatever you want to call the future. The prelude is drawing to a close, and the symphony is about to start. We want you to join us at this show.

Thank you for listening.

Blog Series – Five Challenges for Hadoop MapReduce in the Enterprise, Part 2

Challenge #2: Current Hadoop MapReduce implementations lack flexibility and reliable resource management

As outlined in Part 1 of this series, here at Platform we’ve identified five significant challenges that we believe are currently hindering Hadoop MapReduce adoption in enterprise environments. The second challenge, addressed here, is the lack of flexibility and reliable resource management in the open source solutions currently on the market.

Current Hadoop MapReduce implementations derived from open source are not equipped to handle the dynamic resource allocation required by different applications. They are also susceptible to single points of failure at the HDFS NameNode as well as the JobTracker. As mentioned in Part 1, these shortcomings stem from the fundamental architectural design of the open source implementation, in which the job tracker is not separated from the resource manager. As IT continues its transformation from a cost center to a service-oriented organization, the need for an enterprise-class platform capable of providing services to multiple lines of business will rise. To support MapReduce applications running in a robust production environment, a runtime engine offering dynamic resource management (such as borrowing and lending capabilities) is critical to helping IT deliver its services to multiple business units while meeting their service level agreements. Dynamic resource allocation promises not only to yield extremely high resource utilization but also to eliminate IT silos, thereby bringing tangible ROI to enterprise IT data centers.
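To make the borrowing-and-lending idea concrete, here is a small illustrative sketch in R (the language used elsewhere on this blog). This is not Platform code, just a toy allocation policy with assumed numbers, in which a line of business with idle slots lends them to one with pending demand and reclaims them on the next allocation cycle:

# Toy borrow/lend allocation policy (illustrative only, not Platform code).
# Each line of business (LOB) owns a share of the cluster; idle owned slots
# are lent to LOBs whose demand exceeds their ownership.
allocate <- function(owned, demand) {
  used      <- pmin(owned, demand)        # each LOB first uses its own slots
  lendable  <- sum(owned - used)          # idle slots available to lend
  shortfall <- demand - used              # unmet demand per LOB
  borrowed  <- if (sum(shortfall) > 0)
    floor(lendable * shortfall / sum(shortfall)) else shortfall * 0
  used + borrowed
}

owned  <- c(risk = 60, pricing = 40)      # assumed slot ownership
demand <- c(risk = 20, pricing = 90)      # assumed current demand
allocate(owned, demand)                   # risk: 20, pricing: 40 own + 40 borrowed = 80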

Equally important is high reliability. An enterprise-class MapReduce implementation must be highly reliable, with no single points of failure. Some may argue that the existing Hadoop MapReduce solution has shown very low failure rates and that reliability is therefore not of high importance. However, our experience and long history of working with enterprise-class customers have proved that in mission-critical environments the cost of a single failure is measured in millions of dollars and is in no way justifiable for the organization. Eliminating single points of failure can significantly reduce downtime risk for IT. For many organizations, that translates to faster time to results and higher profits.

Blog Series – Five Challenges for Hadoop MapReduce in the Enterprise


With the emergence of “big data” has come a number of new programming methodologies for collecting, processing and analyzing large volumes of often unstructured data. Although Hadoop MapReduce is one of the more promising approaches for processing and organizing results from unstructured data, the engine running underneath MapReduce applications is not yet enterprise-ready. At Platform Computing, we have identified five major challenges in the current Hadoop MapReduce implementation:

·         Lack of performance and scalability
·         Lack of flexible and reliable resource management
·         Lack of application deployment support
·         Lack of quality of service
·         Lack of multiple data source support

I will be taking an in-depth look at each of the above challenges in this blog series. To finish, I will share our vision of what an enterprise-class solution should look like: one that not only addresses the five challenges customers are currently facing, but also goes beyond them to explore the capabilities of a next-generation Hadoop MapReduce runtime engine.

Challenge #1: Lack of performance and scalability

Currently, the open source Hadoop MapReduce programming model does not provide the performance and scalability needed for production environments; this is mainly due to its fundamental architectural design. On the performance front, to be most useful in a robust enterprise environment a MapReduce job should take sub-milliseconds to start, but job startup time in the current open source MapReduce implementation is measured in seconds. This high latency at the outset leads to delays in reaching final results and can cause significant financial loss to an organization. For instance, in the capital markets side of the financial services sector, a millisecond of delay can cost a firm millions of dollars. On the scalability front, customers are looking for a runtime solution that is not only capable of scaling a single MapReduce application as the problem size grows, but that can also support multiple applications of different kinds running across thousands of cores and servers at the same time. The current Hadoop MapReduce implementation does not provide such capabilities. As a result, for each MapReduce job, a customer has to assign a dedicated cluster to run that particular application, one at a time. This lack of scalability not only introduces additional complexity into an already complex IT data center, making it harder to manage, but also creates a siloed IT environment in which resources are poorly utilized.
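As a rough back-of-the-envelope illustration of why startup latency matters: the job count and per-job work below are my own assumed numbers, not benchmarks; only the "seconds versus sub-millisecond" startup figures come from the discussion above.

# Back-of-the-envelope: compute time consumed by many short jobs under two
# startup latencies. Job count and per-job work are assumed for illustration.
jobs         <- 100000      # fine-grained jobs/tasks in a workload (assumed)
work_per_job <- 0.050       # 50 ms of useful computation each (assumed)
startup_slow <- 1.0         # seconds-level startup, as in the open source implementation
startup_fast <- 0.001       # sub-millisecond startup target

compute_hours <- function(startup) jobs * (startup + work_per_job) / 3600
compute_hours(startup_slow)   # ~29 hours consumed, dominated by startup overhead
compute_hours(startup_fast)   # ~1.4 hours, dominated by useful work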

A lack of guaranteed performance and scalability is just one of the roadblocks preventing enterprise customers from running MapReduce applications at production scale. In the next blog, we will discuss the shortcomings in resource management in the current Hadoop MapReduce offering and examine the impact they have on organizations tackling “Big Data” problems.

Private Cloud at State Street




There is nothing better than a real-world customer case study to explain the value of deploying a product. Cut through the marketing and explain:
  1. The underlying problem / pain points
  2. The vendor selection process
  3. Key decisions and lessons learned
  4. Actual value being achieved

At the recent Wall Street & Technology Capital Markets Cloud Symposium, the Chief Infrastructure Architect from State Street, Kevin Sullivan, discussed how cloud – specifically private cloud – is being rolled out across 20+ projects.


For more details about Mr. Sullivan's presentation, there is a great article in Advanced Trading magazine by Phil Albinus. Mr. Albinus provides details about the decision making process, key vendors, and next steps. To learn more about Platform Computing's cloud solution, please visit our private cloud solution area.



Some additional metrics that were discussed include:

  • Started in 2009
  • Went from 400 to 150 use cases
  • Operated a 6 month proof-of-concept (POC)
  • Down-selected from 150 technology partners to a ‘handful’
  • Provisioning has been reduced from 8 weeks to hours
  • Unique Active-Active two data center configuration for 100% app high availability

One of the four key elements of the State Street cloud environment is the ‘cloud controller’. In their environment, the cloud controller manages self-service, configurations, provisioning, and automation of application construction within the infrastructure. A key benefit for developers was the ability to get access to more self-service capacity on demand, much faster. This easily outweighed any cultural concerns about not being able to specify an IT environment request to the n-th degree. If needed, IT could still handle ‘custom’ requests, but they would take longer.


In addition, Mr. Sullivan highlighted several key design principles driving the design, building, and management of the cloud:

  • Simplification – creation of standards, consolidation, and self-service
  • Automation – for deployment, metrics, elasticity, and monitoring
  • Leverage – commodity hardware and software stack with a focus on reuse of the platform, services, and data

The benefits and successes of private cloud at State Street are becoming increasingly well known, as their CIO, Christopher Perretta, is fond of discussing the topic. Additional resources are available to learn more about the State Street cloud project.

What do you think can be learned from State Street’s successes?

Financial Services Market Going Strong & Platform Symphony 5.1 Launched

I am happy to report that over the last two quarters, Platform has won several significant multi-million-dollar deals in both the capital markets and insurance sectors and expanded its portfolio of Fortune 500 customers. These landmark sales indicate growing demand for Platform’s products, including our cloud management technology, Platform ISF, and Platform Symphony, whose latest version we are announcing today. In fact, Platform now counts many of Wall Street’s largest banks and trading firms among its flagship customers, including 12 of the Bloomberg 20. We also had two significant customer wins in Bloomberg’s top 10 listing in the past fiscal year, which means Platform can now count the majority (six) of the top 10 banks and financial institutions among its customer base!

Today’s release of Platform Symphony 5.1 comes at an important time, marked by increased economic and regulatory challenges that are driving organizations to reevaluate their approaches to risk management and compliance. Products and portfolios in both the financial services and enterprise sectors, as well as the sheer amount of complex data organizations need to process, have today created demand for powerful, flexible systems that provide accurate, real-time counterparty risk assessments. Symphony is an enterprise-class solution that improves efficiency for these distributed service-oriented architecture (SOA) applications, enabling a variety of business and financial risk applications, including real-time pricing, value-at-risk and Monte Carlo simulations, all at a lower cost. Platform Symphony also provides distributed computing workload solutions for analytics processing, such as Complex Event Processing (CEP), Extract, Transform and Load (ETL) and Business Intelligence.

Platform Symphony 5.1, now with easier application onboarding, GPU support and increased scalability, will be generally available at the end of this month. To read more about its updated features and Platform’s strong showing in FS sales, please check out our press room for the official announcement.

HPC from A-Z (part 9) - I

Insurance


Like all financial services companies today, insurance companies (whether national or global in scale) are under constant pressure. The rise of consumer empowerment through the growth of online comparison sites means insurance companies’ websites need to manage increased volumes of traffic, requiring additional processing power. Meanwhile, the reporting requirements of the regulatory environment are unrelenting. And all of this comes against a backdrop of consumer uncertainty about the economic future and an incredibly competitive industry.


Recently I was impressed with the bold action a major international insurance customer, specializing in retail services, took to plan ahead for these types of challenges. The company wanted to support strategic decision-making around the complex issue of the ‘inherited estate’ by scaling out from a small, overloaded cluster to hundreds of machines. It also needed to manage the workload brought on by the strict UK regulatory environment; PricewaterhouseCoopers had dauntingly explained it would need a ‘weapon of mass computation’ to achieve this.


HPC gave the company the ability to run up to 10 times more scenario models at once, providing it with the information needed to make a strategic decision not to reallocate inherited estate money. However, the overall strategic value gained is something which the wider insurance industry would benefit from considering.


While this customer’s approach is not unique, there are still many companies in the industry that have not implemented tools that allow them to improve the performance of their compute applications and hardware. And beyond this, there’s even more value to be derived from existing HPC infrastructure: for example, extending modelling to Individual Capital Assessments. These tools help firms identify major sources of risk within their business (credit, market, liquidity, operational and insurance risks) and perform appropriate stress scenario testing for each of the identified risks. Surely this is hugely desirable, if not essential, for the industry at large?

“R” integrated with Symphony

In my previous blog, I mentioned that Platform Symphony can be used in statistical computing. “R” is a programming language and software environment for statistical computing and graphics. The R language has become a de facto standard among statisticians for developing statistical software, and it is widely used for statistical software development and data analysis. R is an implementation of the S programming language combined with lexical scoping semantics inspired by Scheme. Below is an example of R code that a customer needed to speed up:



Sample code of R:

func <- function(x, y, z) { sqrt(x^2 + y^2) + z }  # illustrative stand-in for "some operations"
data1 <- c(1:20000)
data2 <- c(20000:1)
result <- data1
for (i in 1:500)
{
  iresult <- mapply(func, data1, data2, i)  # Time-consuming operation
  result <- result + iresult
}
Problem:

One of our customers was running this on a single node, and it took around 5 minutes to complete on a single core. Is there a way to parallelize the above code and make use of multiple cores to speed up the calculation?

Solution:

We integrated “R” with Symphony and ran the same script: what took five minutes on a single core now took around 30 seconds on 10 cores. With Platform Symphony, one gets linear speedup and virtually unlimited scale.
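For readers who want to try the idea themselves before bringing in grid middleware, here is a minimal sketch of the same pattern parallelized with R's built-in parallel package. This only spreads the mapply work across local cores and is not Platform Symphony's API; a Symphony integration plays the same role but schedules the work across the grid:

library(parallel)

func <- function(x, y, z) { sqrt(x^2 + y^2) + z }   # stand-in for the real computation
data1 <- c(1:20000)
data2 <- c(20000:1)
result <- data1

cl <- makeCluster(detectCores())   # local worker processes; a grid supplies remote engines
clusterExport(cl, "func")

for (i in 1:500) {
  # clusterMap distributes the element-wise calls across the workers,
  # the role the grid middleware plays at data-center scale
  iresult <- unlist(clusterMap(cl, func, data1, data2, MoreArgs = list(z = i)))
  result  <- result + iresult
}

stopCluster(cl)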

Platform Symphony outside Financial Services

Although the majority of Platform Symphony’s customers are in the Financial Services sector, Platform Symphony’s core value propositions (faster and cheaper, with more agility) also apply to a variety of industries outside of Financial Services.

For example, Platform Symphony is widely used for Monte Carlo simulations in finance. Those same methods have also been applied to fields as diverse as space exploration, oil exploration and predicting cost or schedule over-runs (a minimal sketch of such a simulation follows the list below). Specific application areas that make use of Monte Carlo simulation include:

  • Reliability engineering (CAE)
  • Bio-molecular simulations, computational chemistry (Pharma)
  • Particle physics & big research (Government)
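To give a flavor of what these simulations look like in practice, here is a minimal Monte Carlo sketch in R for one of the uses mentioned above, estimating the chance of a project cost over-run. All distributions and figures are invented for illustration:

# Minimal Monte Carlo sketch: probability of a project cost over-run.
# All distributions and figures below are invented for illustration.
set.seed(42)
n         <- 1e6
labour    <- rnorm(n, mean = 5.0, sd = 1.0)   # $M, uncertain labour cost
materials <- rnorm(n, mean = 3.0, sd = 0.5)   # $M, uncertain materials cost
delays    <- rexp(n, rate = 2)                # $M, occasional schedule slips
total     <- labour + materials + delays

budget <- 10
mean(total > budget)     # estimated probability of exceeding the budget
quantile(total, 0.95)    # 95th percentile cost, a "worst plausible case"
# Each draw is independent, which is why such workloads parallelize so well
# on a grid: split n across engines and combine the results.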
There are also other applications that can utilize the power of Platform Symphony, for example statistical computing and numerical computing. In my next blog post, I’ll talk about just how Symphony can be applied in statistical computing scenarios.

Platform Symphony for Financial Services

One of the principal applications for Platform Symphony is risk management. While risk management techniques vary by customer type, all financial services companies face the same basic challenge of modeling multiple scenarios describing different risk factors so as to maximize gains and minimize the potential for financial loss.

Here are some examples of problems financial services customers are typically trying to solve:
  • Value at Risk (VaR) – end of day / intraday – Monte Carlo simulation based or based on historical market data
  • Pricing model validation / back-testing
  • Model changing risk intraday to drive hedging strategies
  • Sensitivity analysis / “what-if” analysis
  • Accurately measure risk to optimize capital reserves
  • Counterparty credit risk analysis
  • New product development
  • Pre-trade limit checks
  • Calculate key measures like CVA on a pre-deal basis
  • Timely reporting to meet regulatory requirements
  • New regulatory requirements driving increased need for HPC

In all of the above examples, customers parallelize their application(s) via APIs and tools provided by Symphony. Fine-grained units of workload are submitted to Symphony, and Symphony schedules those units of workload onto all of the available resources based on the configured policies.
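To make the pattern concrete, below is a deliberately simplified, single-machine sketch in R of the first workload in the list, a Monte Carlo Value at Risk calculation. The portfolio size, return and volatility figures are invented, and a real Symphony deployment would split the simulated paths into fine-grained tasks and let the grid schedule them across engines:

# Simplified one-day Monte Carlo VaR sketch (illustrative figures only).
# On a grid, the n simulation paths would be split into fine-grained tasks,
# scheduled across engines, and the losses merged before taking the quantile.
set.seed(1)
n         <- 1e6
portfolio <- 100e6                    # $100M portfolio value (assumed)
mu        <- 0.0002                   # assumed daily mean return
sigma     <- 0.015                    # assumed daily volatility
returns   <- rnorm(n, mean = mu, sd = sigma)   # simulated one-day returns
losses    <- -portfolio * returns              # loss is negative P&L
VaR99     <- quantile(losses, 0.99)            # 1-day 99% Value at Risk
VaR99                                          # roughly $3.5M with these inputs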

HPC from A-Z (part 6) - F

F is for financial services…

Every day, financial traders work at the speed of light to deal in a wide variety of investment banking products. To stay ahead of the game, they rely heavily on large numbers of complex, compute-intensive risk analysis and commodity pricing calculations. Without these, it would simply be impossible to turn a profit!

As the last few years have taken their toll on the industry, competitive pressure has rocketed up the Richter scale. Banks now need to provide results faster than ever. However, they face a rapidly rising volume of information as well as industry regulations that place extraordinary demands on existing computing resources.

Banks like Svenska Handelsbanken, Société Générale and Citigroup have turned to Platform Symphony to help make light work of heavy numbers, securing their positions as leaders in the industry.

With statistics such as computer hardware utilization increasing by as much as 60% as a result of implementation, it’s no surprise that grid technologies are at the heart of the financial world.

Waters USA 2010

Waters USA 2010 was held on December 6, 2010 at the Marriott Marquis Hotel in Times Square. I was not sure what to expect, as this was my first time attending this show. I must say that it was very well attended; my guess is that over 500 people attended the event. There were representatives from quite a few of the major Wall Street firms: JP Morgan Chase, Credit Suisse, Royal Bank of Scotland, Morgan Stanley, Deutsche Bank and Bank of America were on hand to discuss the most interesting topics in financial services today. An especially hot topic was High Frequency Trading (HFT).

The keynote was presented by Andrew Silverman from Morgan Stanley. He presented quite a few interesting facts. For example, S&P volatility compared to price in 2008/2009 was greater than 6%, the highest we have seen. Average trade size is lower than it has been historically, yet trade volume is 7 times higher than in 2004. The average volume of trades in US markets is around 12 billion shares; in 2010 it was about 8 billion, which is quite a drop! Is that a trend or just a blip? One other interesting point: it is “guesstimated” that 50-70% of volume comes from High Frequency Trading.

There were quite a few panel discussions, including sessions on how firms are trying to increase the utilization of their existing assets and infrastructure to deliver services in a cost-effective way, as well as discussions of regulations on HFT, risk management and controls, and how to continue to be innovative and drive revenue growth.

Public sentiment, as well as the regulatory bodies, holds that the FS industry has not done enough to self-regulate, and issues like the “flash crash” and order leakage (IOIs) will likely force government regulators to impose regulations now. Nobody knows exactly what the future regulations will look like at this point, but everyone knows they are coming.

The main takeaways for me:
1. Firms need to try and gain additional advantages from existing infrastructure to support the growth of the business. This could include the use of grid software, private clouds and hybrid clouds (private/public clouds) which can be provided by vendors like Platform Computing.

2. Additional regulations, whatever shape or form they come in, will require IT infrastructure to be adaptive and flexible in terms of providing more computing power when it is needed to handle the multitude of computations and reporting requirements imposed by these regulations. More risk management controls and reporting transparency will put more pressure on infrastructure, which means firms will either need to build more or find ways to get more out of their existing servers.

In both cases, partnering with Platform Computing to continue to grow the business, meet new regulatory requirements and still meet business service levels is the key to firms being successful in the future, whatever it may hold.

Waters USA 2010 - NYC

This year’s Waters USA conference, held on December 6th, 2010, was one of the biggest and best-attended conferences I have been to all year. Held at the Marriott Marquis in the heart of Times Square, it was truly a great preview of where the Financial Services industry will be heading in 2011. The show’s focus was on how FS firms continue to leverage technology to achieve their trading and business strategies as well as deal with new and pending regulation. Platform Computing was one of the event sponsors, along with IBM, Microsoft, HP and Bloomberg, which shows who FS firms look to when trying to meet these tough challenges.

The event’s program read like a who’s who of the major FS firms in the industry. Morgan Stanley, Credit Suisse, Goldman Sachs and JP Morgan Chase all had experts on hand to discuss their most pressing issues around High-Frequency Trading (HFT), risk management and controls, and how firms can drive revenue and still meet regulatory guidelines using technologies like Platform LSF and Platform Symphony grid computing and Platform ISF cloud computing.

The keynote address, given by Andrew Silverman, who is responsible for electronic trading at Morgan Stanley, explained effectively how the “regulatory pendulum” is starting to swing back towards mandated rules from organizations like the SEC, the Federal Reserve Bank, the FSA (Financial Services Authority) and Japan’s FSA. Mr. Silverman felt FS firms needed to wake up and self-regulate before regulators mandate rules that are more costly to implement and ineffective to boot. He explained how technology is an enabler that firms cannot do without. He also noted that firms need to use technology ethically, such as by putting the client’s needs ahead of the broker’s, for example by using smart order routing technology to achieve best execution.

At the CIO/CTO roundtable that followed, Peter Kelso (Global CIO, DB Advisors), Michael Radziemski (Partner and CIO, Lord Abbett & Co), Scott Marcar (Global Head of Risk & Finance Technology, RBS) and Peter Richards (Managing Director and CTO, Global Head of Production and Infrastructure, JP Morgan Chase) strongly agreed that improving their existing infrastructures is a top priority. They all felt that their firms need to continue to break down their operating silos and change their infrastructures to increase efficiency and utilization and to lower costs. All participants stated that their IT budgets will remain “flat” in 2011, even though they are expected to “do more”. They agreed this would only be possible by leveraging existing infrastructure better, with technologies such as grid and cloud computing products, which will be a key focus for all their firms.

One other panel worth mentioning was “Past the cloud computing hype: Constructing a safe cloud environment”, where two FS technology executives, Michael Ryan (Director, Bank of America) and Uche Abalogu (CTO, Harbinger Capital Partners), and two vendors, Dr. Songnian Zhou (CEO, Platform Computing) and Matt Blythe (Product Manager, Technical Computing, Microsoft), discussed the merits and challenges of cloud computing. In summary, the panelists believe FS firms are first looking at better utilizing their existing infrastructure resources by building their own private clouds. While firms are exploring hybrid clouds and expect that this is eventually where they will head, security, data management and latency remain key issues in their adoption.

In summary, it seems that building private clouds is where the focus is now, and that partnering with a vendor like Platform, which has experience in leveraging existing infrastructure to maximize efficiency and utilization while still meeting business service levels, is key to being successful.

Type of analytics

In my previous blog, I spoke about running analytics on a grid composed of commodity hardware. In this blog I will discuss the different types of analytics and where they are used. Predictive analytics is used in actuarial science, financial services, insurance, telecommunications, retail, travel, healthcare, pharmaceuticals and other fields. Sales analytics is the process of determining how successful a company's sales forecast is and how best to predict future sales. In financial services, credit scoring analytics is used to determine future payment risk. Financial services firms also perform rigorous risk and credit analysis, which is required by industry regulation and provides insight into a firm's exposure to different types of risk. There are other types of analytics as well: marketing, collections, fraud, pricing, supply chain, telecommunications and transportation analytics.

Platform Symphony is the leading grid management middleware solution that can be used to accelerate advanced analytics and/or “do more with less”. In the next blog, we’ll discuss how Platform Symphony can be used to reduce the runtime of analytics or do more analysis with the same hardware.

Analytics on a compute grid?

Analytics programs have become an increasingly important tool for many companies to help provide insight into how the business is performing. The need to measure performance and gauge ROI across lines of business has been particularly important during the global recession over the past two years. At the same time, the amount of data that companies amass daily and must process to analyze performance has exploded. As a result, analytics programs are becoming increasingly bloated, expensive and difficult to run within many organizations.

This problem is due in part to the fact that a lot of analytics are running on legacy environments (including hardware, software and processes). These environments are very expensive to own and maintain in terms of both CapEx and OpEx. Given the market trends of huge increases in data volumes and users, and analytical models requiring near real-time service level agreements (SLAs), these legacy systems have become very expensive and are beginning to hinder, rather than enable, the growth of the business. Furthermore, scalability and performance issues are also beginning to hinder the ease of adoption and use of BI tools.

What is the alternative? Running the analytics applications on a compute grid comprised of commodity hardware. This enables the adoption of a lower-cost, higher-performance approach by simplifying the environment and making it easier for users to move to this model. Rather than overtaxing legacy systems, grid solutions allow you to make the most of the systems you already have in place. Grid provides high availability, lower costs and virtually unlimited scalability.

Platform Symphony is the leading grid management middleware solution, enabling “embarrassingly parallel” execution of high-performance compute- and data-intensive applications using policy-based grid management that enforces application service levels. Platform Symphony also manages a shared pool of resources to meet the dynamic demands of multiple applications, enabling enterprise-class availability, security and scalability and providing maximum utilization across large, diverse pools of resources.

SIBOS off and running

With SIBOS kicking into high gear this week, it’s hard not to think about the significant changes that have gone on in the banking industry over the past few years. The most obvious reflection of this lies in the key themes of the conference. The idea of ‘rebuilding trust’ addresses how banks can tackle the conflict between reducing risk and bringing down cost while ‘recovery’ looks at leveraging technology to innovate and capitalize on the improving climate. ‘Regulation’ also tops the agenda and will no doubt remain a C-level priority for many years to come.

Tying all of these themes together is the role of technology in helping organizations overcome the obstacles they face. After all, data proliferation, and the need to better manage it, is at the root of many challenges being faced by financial institutions today. For example, our recent survey found that 66% of buy-side firms and 56% of sell-side firms are grappling with siloed data sources.

Looking at some of our recent work, it is interesting to see how financial services firms are trying to tackle these areas. Svenska Handelsbanken is using high-performance computing to run core applications for trading and risk management. New regulations around risk in Sweden mean more risk scenarios need to be run, so more compute power allows the bank to plan ahead much more easily. Other financial institutions, including Citigroup, are running similar projects.

As the event continues through the week, it will be interesting to see the different ways financial organizations are tackling these challenges. Hopefully, we will finally begin to see an end to the stormy crisis clouds and a sunny future shining through.

By Jeff Hong, CFA
Director of Financial Services Industry Marketing
Platform Computing

Calling Dr. Feelgood Again!

On the first day of SIBOS here in cold Amsterdam, I attended the market structures keynote and panel session chaired by Chris Skinner, CEO of Balatro. The session opened amicably with Alan Cameron of BNP Paribas exploring the impacts of the Code of Conduct and the T2S initiative on the pan-European Settlement and Clearing landscape. All in all, he concluded that the markets are silo’ed and need to change: either the markets consolidate into one global entity, or they interoperate with one another. His analogy for the prescription for change was that of a doctor presenting three methods for weight loss to a patient.

• Option 1: Go on a diet (harmonization via baby steps).
• Option 2: Exercise (harmonization via interoperability).
• Option 3: Liposuction (harmonization by regulation, e.g. T2S).

He concluded that all three were needed.

Before you say, “Where is this going?”, I’m going to say that the above eerily describes the silo’ed infrastructures found in many large FS firms. Despite banks’ greater focus on efficiency, there remain barriers to consolidation for many reasons, such as interoperability and regulations. As we have seen here at Platform Computing, when firms finally move to a shared pool of resources, they gain economies of scale and IT agility. And as in the Clearing and Settlement space, the way to a shared service operation will likely come from a combination of diet, exercise, and nip-tuck. More on that tomorrow.

By Jeff Hong, CFA
Director of Financial Services Industry Marketing
Platform Computing

Paging “Dr. Feelgood”

Hello, I'm Jeff Hong, Financial Services Industry Marketing at Platform Computing. Recently, we and our partner SAS sponsored a survey with Wall Street & Technology (WS&T) of leading capital markets firms on analytics (http://www.grid-analytics.wallstreetandtech.com/index.jhtml). While many of the survey's results were par for the course, a few findings raised the collective eyebrows here at the company.
Number One "Raise My Brow" Finding: "A significant 36 percent of buy-side firms and 30 percent of sell-side firms run counterparty risk analytics only on an ad hoc basis, or not at all". Uh huh...

Number Two "Raise My Brow" Finding: "Two-thirds of respondents were not confident that their current analytic platform/infrastructure will continue to keep up over time". Uh oh!

But won't Dodd-Frank and its ilk, designed to minimize the risk taken by banks, by implication require more risk analytics? Well, yes and no. There appear to be multiple opinions, some of which include:

• Regulations will dampen enthusiasm for OTC derivatives, favoring exchange traded assets. In this case, risk analytics become simpler.

• Investment banks will divest or shift their prop trading desks to other divisions e.g. asset management. In this case, it's no longer their concern.

• Regulations like Basel III will engender more risk, as banks need to get the same return with less capital in play. More risk analytics are needed.

• Compliance will take years, so regulators and banks have time to make adjustments and plans. More or less risk analytics? Unclear.

Add in the debate on flash trading, and the future of trading and therefore risk analytics becomes unclear. What is clear is that if your firm analyzes counterparty risk on an ad hoc basis or not at all, this is a real problem.

Accurately calculating your risk exposure, be it counterparty risk, liquidity risk, et al., and therefore your firm's health, is always a good idea. Good risk practices will set your firm above the current and future debate on risk and regulations. Investing in your analytics infrastructure, be it new servers, better resource management with grid, or smarter data management, should be a priority. According to the same WS&T survey, financial institutions are looking to cluster, grid, and cloud software technologies. Within the next two years, 51 percent of survey respondents are considering or likely to invest in cluster technology, 53 percent are considering or likely to buy grid technology, and 57 percent are considering or likely to purchase cloud technology.

If your firm is planning to do more analysis, or is already doing it, good for you! Otherwise, take a couple of those blue or pink pills and hope the headache goes away.

The Business of IT?

Two sets of recent discussions highlighted the business role of private clouds for me. The first set of conversations was at the Wall Street & Technology roundtable in NYC on September 20th, where leading financial institutions gathered to discuss the role and challenges of big analytics. The second set of conversations was with our Platform ISF customers who are building private clouds.

My takeaway from the Wall Street & Technology event was that IT silos dominate in areas of high growth or in periods of high growth. In a fast-growing business, cost containment is a lower priority, and business owners and units have more leeway to build their own systems. To paraphrase one comment: “When our stock is at $100, centralization and consolidation are not big issues; when our stock is at $10, sharing and metering come to the forefront”.

My takeaway from the customer discussions is that virtualization is an operational issue, while cloud is a business model issue. For organizations that have concluded that the place to start with cloud is private cloud, the choice of a vendor/partner becomes strategic, because the business model implications of cost and control are the strategic reason to consider cloud in the first place.

In our blog post “a strategy straight out of Redmond”, we discussed the agendas of each vendor camp. The point is that several of those camps would prefer that the decision for private cloud be an operational one rather than a strategic one. For many customers, private clouds may be more of an operational decision, but many are seeing cloud for what it is: the chance for a new IT order with the customer in control. A year ago, a long-time industry analyst told me, “the difference between grid and cloud is that grid was driven by the academics and IT, whereas cloud is being driven by the business”. This applies to virtualization as well.

So if we’re in a period of centralization and consolidation for many IT groups, and the private cloud management layer is strategic for many firms, the choice of your vendor camp becomes as important as, or more important than, the particular vendor you choose. In other words, the business model is likely to drive the architectural model, rather than the other way around.

Will the CIO allow the frog to be boiled?

What do you get when you combine a grass roots solution portfolio and adoption strategy with enterprise pricing schemes? Answer: A very attractive short-term value proposition that is sure to put many customers on the path to vCloud.

What’s the problem? Cost and vendor lock-in.

Cloud computing is the third major wave in the history of distributed computing and offers the promise of open systems enabling customer leverage through commodity IT components. However, incumbent vendors are fighting desperately to create integrated proprietary stacks that protect their core products and licensing models.

This battle for leverage between big vendors and customers comes down to a basic architectural decision: do the CIO and senior architect team have an open systems vision for private cloud that they are willing to pursue in spite of bundled offerings from vendors?

Platform Computing is betting that a large portion of the market will architect their private cloud platforms with heterogeneity and commodity as key principles.


Anyone who doubts that the frog is getting warm or that the intent for customer lock-in is real needs only to read Microsoft’s ad decrying vendor lock-in.